
Collaborating Authors

Buolamwini and Gebru


The unseen Black faces of AI algorithms

#artificialintelligence

Data sets are essential for training and validating machine-learning algorithms. But these data are typically sourced from the Internet, so they encode all the stereotypes, inequalities and power asymmetries that exist in society. These biases are exacerbated by the algorithmic systems that use them, which means that the output of the systems is discriminatory by nature, and will remain problematic and potentially harmful until the data sets are audited and somehow corrected. Although this has long been the case, the first major steps towards overcoming the issue were taken only four years ago, when Joy Buolamwini and Timnit Gebru [1] published a report that kick-started sweeping changes in the ethics of artificial intelligence (AI). As a graduate student in computer science, Buolamwini was frustrated that commercial facial-recognition systems failed to identify her face in photographs and video footage.


Using Artificial Intelligence in Administrative Agencies

#artificialintelligence

The Administrative Conference of the United States (ACUS) has issued a statement to help agencies make more informed decisions about artificial intelligence. Federal agencies increasingly rely on artificial intelligence (AI) tools to do their work and carry out their missions. Nearly half of the federal agencies surveyed for a recent report commissioned by ACUS employ or have experimented with AI tools. The agencies used AI tools across an array of governance tasks, including adjudication, enforcement, data collection and analysis, internal management, and public communications. Agencies' interest in AI tools is not surprising.


Oxford Handbook on AI Ethics Book Chapter on Race and Gender

Gebru, Timnit

arXiv.org Artificial Intelligence

From massive face-recognition-based surveillance and machine-learning-based decision systems predicting crime recidivism rates, to the move towards automated health diagnostic systems, artificial intelligence (AI) is being used in scenarios that have serious consequences in people's lives. However, this rapid permeation of AI into society has not been accompanied by a thorough investigation of the sociopolitical issues that cause certain groups of people to be harmed rather than advantaged by it. For instance, recent studies have shown that commercial face-recognition systems have much higher error rates for dark-skinned women while having minimal errors on light-skinned men. A 2016 ProPublica investigation uncovered that machine-learning-based tools that assess crime recidivism rates in the US are biased against African Americans. Other studies show that natural-language-processing tools trained on newspapers exhibit societal biases (e.g., completing the analogy "Man is to computer programmer as woman is to X" with "homemaker"). At the same time, books such as Weapons of Math Destruction and Automating Inequality detail how people in lower socioeconomic classes in the US are subjected to more automated decision-making tools than those in the upper class. Thus, these tools are most often used on the people towards whom they exhibit the most bias. While many technical solutions have been proposed to alleviate bias in machine-learning systems, a holistic and multifaceted approach is required. This includes standardization bodies determining what types of systems can be used in which scenarios, making sure that automated decision tools are created by people from diverse backgrounds, and understanding the historical and political factors that disadvantage certain groups who are subjected to these tools.
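
The analogy completion described in the abstract is plain vector arithmetic over word embeddings: the nearest neighbours of v("computer programmer") - v("man") + v("woman") are taken as candidate answers for X. Below is a minimal Python sketch of that probe; the pretrained model name "word2vec-google-news-300" and the underscore token "computer_programmer" are assumptions about gensim's downloadable vocabulary, not details stated in the abstract.

    import gensim.downloader as api

    # Load pretrained word vectors (large download on first use, cached after).
    # Model name is an assumption; any word2vec-style KeyedVectors works here.
    kv = api.load("word2vec-google-news-300")

    # "man : computer_programmer :: woman : X" reduces to the nearest
    # neighbours of v(computer_programmer) - v(man) + v(woman).
    for word, score in kv.most_similar(
            positive=["woman", "computer_programmer"],
            negative=["man"], topn=5):
        print(f"{word:20s} {score:.3f}")

On news-trained vectors such as these, the abstract's reported completion, "homemaker", typically appears among the top-ranked neighbours, which is precisely the kind of encoded societal stereotype the chapter discusses.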